American Journal of Infection Control
Elsevier BV
Preprints posted in the last 7 days, ranked by how well they match American Journal of Infection Control's content profile, based on 12 papers previously published here. The average preprint has a 0.01% match score for this journal, so anything above that is already an above-average fit.
Laskaris, Z.; Baron, S.; Markowitz, S. B.
Objectives: Rising temperatures are a major climate-related hazard for U.S. workers, increasing heat-related illness and a broad range of occupational injuries through indirect pathways often overlooked in economic evaluations. We examined the association between temperature and occupational injury and illness and quantified heat-attributable injuries (including illnesses) and costs in New York State. Methods: We conducted a time-stratified case-crossover study of 591,257 workers' compensation (WC) claims during the warm season (2016-2024). Daily maximum temperature was linked to injury date and county and modeled using natural cubic splines, with effect modification by industry and worker characteristics. Results: Injury risk increased with temperature, becoming statistically significant at approximately 78°F. Relative to 65°F, injury odds increased to 1.06 (95% CI: 1.01-1.10) at 80°F, 1.12 (1.07-1.18) at 90°F, and 1.17 (1.11-1.23) at 95°F. Overall, 5.0% of claims (2,322 annually) were attributable to heat. At temperatures ≥80°F, an estimated 1,729 excess injuries occurred annually, generating approximately $46 million in WC costs. An estimated $3.2 million to $36.1 million in medical expenditures were associated with incomplete claims, likely borne outside the WC system. Conclusions: These findings demonstrate substantial economic costs not fully captured within WC and support workplace heat protections as a cost-containment strategy that can reduce health care spending and strengthen workforce resilience.
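The attributable-fraction arithmetic behind estimates like these can be sketched as follows. This is an illustrative calculation under the usual case-crossover convention (attributable fraction among exposed = (OR - 1)/OR), not the authors' code, and the claim count in the example is hypothetical:

```python
def attributable_fraction(odds_ratio: float) -> float:
    """Fraction of exposed cases attributable to the exposure,
    AF = (OR - 1) / OR."""
    return (odds_ratio - 1.0) / odds_ratio

def excess_cases(n_exposed_cases: int, odds_ratio: float) -> float:
    """Expected number of exposed cases that would not have occurred
    in the absence of the exposure."""
    return n_exposed_cases * attributable_fraction(odds_ratio)

# Hypothetical example using the abstract's OR of 1.12 at 90°F,
# applied to a made-up count of 10,000 claims on hot days:
af = attributable_fraction(1.12)
excess = excess_cases(10_000, 1.12)
```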
Mills, E. A.; Bingham, R.; Nijman, R. G.; Sriskandan, S.
Background: An upsurge in Streptococcus pyogenes infections in 2022-2023 highlighted potential benefits of point-of-care tests (POCT) to support clinical pathways, prevent outbreaks, and optimise antibiotic use. Objectives: We conducted a pilot research study in a west London paediatric emergency department (ED) to determine whether a molecular POCT had potential to alter management in children who were also having a conventional throat swab taken for culture. Methods: Children <16 years presenting to the ED who had a throat swab requested by a clinician were invited to have a second swab taken for research purposes only. Clinical management was unaffected by the research swab result, which was processed using a molecular POCT that was not approved for use in the host NHS Trust. Results: Prevalence of streptococcal infection was low during the study (May 2023-June 2025); swab positivity in symptomatic children was 12.8% (6/47). Overall, 38/49 (77.6%) participants who had throat swabs received antibiotics. Of those children recommended to receive antibiotics, 29/38 (76.3%) had a negative POCT. Mean time to reporting of positive throat swab culture results was 3.67 days (range 3-5 days), leading to occasional delays in treatment, whereas the POCT identified positive results within minutes. Conclusion: Antibiotic use was frequent and could be avoided or stopped by use of a rule-out POCT in over three-quarters of children in the ED, if suspicion of S. pyogenes is the main driver for prescribing. POCT were easy to process and produced immediate results compared with culture, in theory enabling timely decision-making and avoiding treatment delay.
Sheth, E.; Case, L.; Shaw, F.; Dwyer, N.; Poland, J.; Wan, Y.; Larru, B.
Background: Pseudomonas aeruginosa is a major cause of healthcare-associated infections in paediatric settings, where its persistence in moist environments such as hospital water and wastewater systems poses a particular risk to neonates and immunocompromised children. Aim: To characterise the long-term survival and transmission of P. aeruginosa in a large tertiary children's hospital in England, which is crucial for developing strategies for water-safe care. Methods: Environmental P. aeruginosa isolates were collected from taps, sinks, showers, and baths in augmented care areas of a 330-bed tertiary children's hospital built to NHS water-safety standards. Clinical isolates were classified as invasive (blood, cerebrospinal fluid, and bronchoalveolar lavage) or non-invasive (respiratory, urine, ear, abdominal, and rectal surveillance). Variable number tandem repeat (VNTR) profiles and metadata were extracted from PDF reports, de-identified, deduplicated, and curated using Python and R. Findings: This retrospective study analysed nine-locus VNTR profiles of 457 P. aeruginosa isolates submitted to the UK Health Security Agency from a large tertiary children's hospital, identifying 56 isolate clusters (each with ≥2 isolates), of which 19 (34%) contained at least one invasive isolate. The most persistent cluster (Cluster 1, n=20) spanned from July 2016 to September 2024, containing environmental and clinical (invasive and non-invasive) isolates. Conclusion: These findings demonstrate long-term persistence of certain genotypes and temporal overlap between environmental and clinical isolates, highlighting the difficulty of detecting and eradicating P. aeruginosa in hospital water and wastewater systems and reinforcing the need for continuous, rigorous water system controls.
Gallardo Mejia, A.; Almeida, J.
Urinary tract infections (UTIs) are among the most common infectious diseases worldwide, with Escherichia coli being the predominant uropathogen. The increasing prevalence of extended-spectrum beta-lactamase (ESBL)-producing strains and their association with fluoroquinolone resistance pose a significant challenge to empirical therapy, particularly in community settings. The aim of this study was to determine the epidemiology and predictive factors associated with ESBL-producing E. coli and its concomitant fluoroquinolone resistance in community-acquired clinical isolates. A retrospective cross-sectional study was conducted analyzing 244 clinical E. coli isolates. Demographic and microbiological data were collected, including age, sex, sample type, and antibiotic susceptibility. Associations between variables and ESBL production were assessed using Pearson's chi-squared test, and odds ratios (ORs) with 95% confidence intervals (CIs) were calculated. Of the isolates, 165 (68%) were ESBL-producing. A significant association was observed between age group and ESBL production (p < 0.001), with the highest frequency in the 20-39 age group. Most ESBL-positive isolates were obtained from women (73%), although odds ratio (OR) analysis suggested a non-significant trend toward a higher probability in men (OR = 1.29; 95% CI: 0.72-2.31). High rates of fluoroquinolone resistance were identified among the ESBL-producing isolates, with 30% resistance to levofloxacin and 35% to ciprofloxacin (p < 0.001). Urine samples showed the highest concentration of ESBL-positive isolates, with a significant association between sample type and resistance (p < 0.001). The high prevalence of ESBL-producing E. coli and its concomitant resistance to fluoroquinolones highlight a critical challenge for the empirical treatment of urinary tract infections in Mexico, underscoring the need to strengthen antimicrobial use management and local surveillance strategies.
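Associations like these are typically summarized as an odds ratio with a Wald confidence interval computed from a 2x2 table; a minimal sketch follows, with hypothetical counts rather than the study's data:

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
         a = exposed cases,   b = exposed non-cases,
         c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical 2x2 table (not the study's counts):
or_, lo, hi = odds_ratio_ci(20, 80, 10, 90)
```

A CI that spans 1.0, as in the abstract's OR for men (0.72-2.31), indicates a non-significant association at the 5% level.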
Kamulegeya, R.; Nabatanzi, R.; Semugenze, D.; Mugala, F.; Takuwa, M.; Nasinghe, E.; Musinguzi, D.; Namiiro, S.; Katumba, A.; Ssengooba, W.; Nakatumba-Nabende, J.; Kivunike, F. N.; Kateete, D. P.
Background: Tuberculosis (TB) remains a leading cause of infectious disease mortality worldwide, and treatment failure contributes to ongoing transmission, drug resistance, and poor clinical outcomes. Artificial intelligence and machine learning approaches have attracted growing interest for predicting tuberculosis treatment outcomes, but the literature is heterogeneous and lacks a comprehensive synthesis. Methods: We conducted a systematic review and meta-analysis of studies that developed or validated machine learning models to predict TB treatment failure. We searched PubMed/MEDLINE and Embase from January 2000 to October 2025. Studies were eligible if they developed, validated, or implemented an artificial intelligence or machine learning model for the prediction of TB treatment failure or a closely related poor outcome in patients receiving anti-TB treatment. Risk of bias was assessed using the Prediction model Risk Of Bias Assessment Tool. Random-effects meta-analysis was performed to pool area under the curve values, with subgroup analyses and meta-regression to explore heterogeneity. Results: Thirty-four studies were included in the systematic review, of which 19 reported area under the curve values suitable for meta-analysis (total participants, 100,790). Studies were published between 2014 and 2025, with 91% published from 2019 onward. Tree-based methods were the most common algorithm family (52.9%), and multimodal models integrating three or more data types were used in 41.2% of studies. The pooled area under the curve was 0.836 (95% confidence interval 0.799-0.868), with substantial heterogeneity (I² = 97.9%). In subgroup analyses, studies including HIV-positive participants showed lower discrimination (pooled area under the curve 0.748) compared to those excluding them (0.924).
Only eight studies (23.5%) performed external validation, and only one study (2.9%) was rated as low risk of bias overall, primarily due to methodological concerns in the analysis domain. Egger's test suggested publication bias (p = 0.024). Major evidence gaps included underrepresentation of high-burden countries, HIV-affected populations, social determinants, pediatric TB, and extrapulmonary disease. Conclusions: Machine learning models for predicting TB treatment failure show promising discrimination but are not yet ready for routine clinical implementation. Performance varies substantially across populations and settings, and methodological limitations, including inadequate validation, poor calibration assessment, and high risk of bias, limit confidence in current estimates. Future research should prioritize rigorous external validation, calibration assessment, and development in underrepresented populations, particularly HIV-affected and high-burden settings. Author Summary: TB kills over a million people annually. While curable, treatment failure remains common and drives ongoing transmission and drug resistance. Researchers increasingly use artificial intelligence and machine learning to predict which patients will fail treatment, but it is unclear if these models are ready for clinical use. We reviewed 34 studies including nearly 1.1 million participants from 22 countries. On average, models correctly distinguished patients who would fail treatment from those who would not 84% of the time, a performance generally considered good. However, this average hid enormous variation. Models developed in populations including HIV-positive people performed substantially worse, suggesting prediction is harder with HIV co-infection. Worryingly, only one study used high-quality methods; 97% had serious flaws in handling missing data, checking calibration, or testing in new populations. Only eight studies validated their models in different settings.
To conclude, we found that machine learning is promising in predicting TB treatment failure, but it is not ready for clinical use. Researchers should prioritize validation in high-burden settings, include social determinants, and improve methodological rigor before these tools can help patients.
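One standard estimator for the random-effects pooling described above is DerSimonian-Laird; the sketch below is a simplified illustration (pooling AUCs on the raw scale, no confidence interval), not necessarily the authors' exact method, and the inputs are made up:

```python
def dl_pool(estimates, variances):
    """DerSimonian-Laird random-effects pooling.
    Returns (pooled estimate, tau^2 between-study variance, I^2)."""
    k = len(estimates)
    w = [1 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                  # method-of-moments tau^2
    w_star = [1 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0  # heterogeneity fraction
    return pooled, tau2, i2

# Hypothetical AUCs and within-study variances:
pooled, tau2, i2 = dl_pool([0.80, 0.80, 0.80], [0.01, 0.01, 0.01])
```

With identical estimates, Q = 0, so tau² and I² are both zero; the abstract's I² of 97.9% indicates almost all observed variation is between-study.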
Tiseo, K.; Dräger, S.; Santhosh Kumar, H.; Alkhazashvili, M.; Hammann, A.; Risch, P.; Willi, R.; Mkhatvari, T.; Fialova, C.; Adlhart, C.; Szabo, D.; Suknidze, M.; Patchkoria, I.; Broger, T.; Ivanova Reipold, E.; Varshanidze, K.; Osthoff, M.
Etiological diagnosis of lower respiratory tract infections (LRTIs) relies on sputum or bronchoalveolar lavage (BAL), which may be difficult to obtain or invasive. Exhaled breath aerosol (XBA) sampling offers a non-invasive alternative for pathogen detection. We evaluated the performance of the AveloMask, a face mask-based device designed to capture XBAs for molecular testing. In this prospective paired-sample study, hospitalized adults with pneumonia at three hospitals in Switzerland and Georgia provided an XBA sample using the AveloMask and a lower respiratory tract (LRT) specimen (sputum or BAL). XBA samples were analyzed by multiplex PCR using the Roche LightMix® panel, and LRT samples were tested using the BioFire® FilmArray® Pneumonia Panel. Concordance between XBA and LRT samples was assessed using positive percent agreement (PPA), negative percent agreement (NPA), and overall percent agreement (OPA). Ninety-three participants were enrolled and 63 provided paired samples. AveloMask sampling identified the dominant pathogen (lowest Ct value in the LRT sample) in 40/47 LRT-positive cases (85.1%). Across all targets, PPA was 61% (95% CI, 50-72%), NPA was 100% (95% CI, 99-100%), and OPA was 95% (95% CI, 92-96%). PPA was higher for bacteria than for viruses, and the lower PPA was largely driven by reduced detection of low-abundance or co-infecting pathogens. In a subset analysis, AveloMask results showed substantial overlap with standard-of-care testing and could have supported antimicrobial de-escalation. Breath aerosol sampling using the AveloMask enabled non-invasive molecular detection of LRT pathogens in pneumonia cases and may complement conventional standard-of-care testing, particularly when sputum is unavailable.
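The three agreement metrics can be computed directly from paired per-target results; a minimal sketch, assuming boolean (index test, comparator) pairs, illustrative rather than the study's code:

```python
def agreement(pairs):
    """PPA, NPA, and OPA between an index test and a comparator,
    from a list of (index_positive, comparator_positive) booleans."""
    tp = sum(1 for i, c in pairs if i and c)
    fn = sum(1 for i, c in pairs if (not i) and c)
    tn = sum(1 for i, c in pairs if (not i) and (not c))
    fp = sum(1 for i, c in pairs if i and (not c))
    ppa = tp / (tp + fn)          # agreement on comparator-positive targets
    npa = tn / (tn + fp)          # agreement on comparator-negative targets
    opa = (tp + tn) / len(pairs)  # overall agreement
    return ppa, npa, opa

# Hypothetical paired results (index, comparator):
pairs = [(True, True)] * 6 + [(False, True)] * 4 + [(False, False)] * 10
ppa, npa, opa = agreement(pairs)
```

PPA/NPA are preferred over sensitivity/specificity when the comparator (here, the LRT panel) is itself an imperfect reference rather than a gold standard.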
Hassani, A.; Pecar, K.; Soliman, M.; Bunyon, P.; Ellinger, C.; Tulysewskid, G.; Croft, J.; Carillo, C.; Wewegama, G.; du Plessis-Schneider, S.; Estevez, J. J.
Background Individuals experiencing or at risk of homelessness face substantial barriers to preventive eye care that are poorly addressed by standard service models. Interdisciplinary optometry-social work collaboration offers a rights-based approach to improving engagement and continuity of care. Methods A convergent mixed-methods study was conducted between February and August 2024 at a multidisciplinary community centre. Clients experiencing or at risk of homelessness received integrated optometry and social work assessment and were prioritised as high, medium, or low based on combined clinical and social risk. Social work follow-up was guided by the Triple Mandate and W-Questions framework. Quantitative data were summarised using mean (SD), median [IQR], or n (%). Qualitative case notes were analysed using content analysis with inductive coding and secondary review for consistency. Results A total of 165 clients had priority categories coded (high: 68; medium: 47; low: 154). Demographic data were available for 132 clients (60% male; mean age 49.5 years [SD 16]); 27% had not completed high school, 89% reported weekly income below AUD 1000, and 28% had vision impairment. Two hundred forty-five case-note entries were consolidated into 146 unique records. SMS (46%) and phone calls (38%) were the most documented contact methods, although only 21% of calls were answered; missed calls (13%) and disconnected numbers (7%) were common. Multi-modal contact was more frequently documented for higher-priority clients. Appointment assistance was the most recorded facilitator (71%), while rights-based supports, including interpreter and transport assistance, were infrequently documented (≤5%). Qualitative analysis identified unstable communication, reliance on informal supports, and service fragmentation as key influences on recall outcomes.
Conclusion This study supports an interdisciplinary, rights-based optometry-social work model to address barriers to preventive eye care among people experiencing or at risk of homelessness. Embedding structured handovers and tiered recall processes within community-based services may strengthen continuity and accountability for high-priority clients. Future implementation should evaluate outcomes related to equity of reach, service integration, and sustained engagement in care.
Oliveira Roster, K. I.; Rönn, M. M.; Gorenburg, E. R.; Partl, D. K.; Anderegg, N.; Abel zur Wiesch, P.; Au, C.; Kouyos, R. D.; Martinez, F. P.; Low, N.; Grad, Y. H.
Numerous factors may influence the optimal rollout of new gonococcal antibiotics. We compared eight rollout strategies using a gonorrhea transmission model and ranked strategies by the number of gonococcal infections and clinically useful antibiotic lifespan. Rankings were most sensitive to the starting ceftriaxone resistance prevalence and screening frequency.
Murakami, M.; Ohtake, F.
While vaccination conflicts have become apparent, physicians' attitudes toward those with differing views remain unclear. Through an online survey of 492 physicians and 5,252 members of the general public in Japan in February 2026, we investigated attitudes toward four vaccines (influenza, measles, HPV, and COVID-19). Intergroup bias was assessed as ingroup minus outgroup attitudes using a feeling thermometer. Multilevel regression examined associations with agreement group and physician status. Intergroup bias was significantly positive in both agreement and disagreement groups across all vaccine types, and was higher in the agreement group. Physicians exhibited higher intergroup bias than the general public. These findings indicate that vaccination conflict is bidirectional: physicians, often viewed as targets of hostility from vaccine-hesitant individuals, themselves exhibit greater intergroup bias toward those with opposing views. Interventions to raise physicians' awareness of their own bias, alongside communication strategies for vaccine-hesitant individuals, are needed.
Hu, F.; Wei, J.; Muller-Pebody, B.; Hope, R.; Brown, C.; Carreira, H.; Demirjian, A.; Walker, A. S.; Eyre, D. W.
Objectives: To identify risk factors for antimicrobial resistance (AMR) in seven pathogen-antimicrobial combinations in patients with cancer and cancer survivors. Methods: Using data from patients with recent or past cancer diagnostic codes in Oxfordshire, UK, we examined associations between 22 potential risk factors and AMR in blood culture isolates collected between 1 April 2015 and 31 March 2025. Results: Among 5,975 bacteraemias in 4,365 adults, we analysed 3,141 (52.6%) due to Enterobacterales and 620 (10.4%) due to Enterococcus faecalis/faecium in 2,752 patients. Fourteen risk factors for antimicrobial-resistant bacteraemia were identified, varying across pathogen-antimicrobial combinations. Compared with no previous antimicrobial susceptibility test result, prior resistance to the same antibiotic in any culture in the last year was strongly associated with AMR across all pathogen-antimicrobial combinations (all p ≤ 0.001). Prior antibiotic exposure and younger age were also positively associated with AMR in four and five combinations, respectively. Cancer type showed modest effects; lymphoid/haematopoietic malignancies were associated with higher odds (vs colorectal cancer) of trimethoprim-sulfamethoxazole-resistant Enterobacterales (aOR = 2.07, 95% CI 1.40-3.06) and vancomycin-resistant Enterococcus bacteraemia (aOR = 6.68, 95% CI 1.21-36.91). Conclusions: Previous resistance was the greatest risk factor for bacteraemia with AMR in cancer patients and survivors, with prior antibiotic exposure and age also contributing. Lymphoid/haematopoietic malignancies increased the risk of resistance to specific antimicrobials. Keywords: antimicrobial resistance, bacteraemia, cancer, risk factors
Chaves, E. T.; Teunis, J. T.; Digmayer Romero, V. H.; van Nistelrooij, N.; Vinayahalingam, S.; Sezen-Hulsmans, D.; Mendes, F. M.; Huysmans, M.-C.; Cenci, M. S.; Lima, G. d. S.
Background: Radiographic detection of caries lesions adjacent to restorations is challenging due to limitations of two-dimensional imaging and difficulties distinguishing true lesions from restorative or anatomical radiolucencies. Artificial intelligence (AI)-based clinical decision support systems (CDSSs) have been introduced to assist radiographic interpretation; however, different AI tools may yield variable diagnostic outputs, and their comparative performance remains unclear. Objective: To compare the diagnostic performance of commercial and experimental AI algorithms for detecting secondary caries lesions on bitewings. Methods: This cross-sectional diagnostic accuracy study included 200 anonymized bitewings comprising 885 restored tooth surfaces. A consensus-group reference standard identified all surfaces with a caries lesion and classified each lesion by type (primary/secondary) and depth (enamel-only/dentin-involved). Five commercial (Second Opinion, CranioCatch, Diagnocat, DIO Inteligencia, and Align X-ray Insights) and three experimental (Mask R-CNN-based and Mask DINO-based) systems were tested. Diagnostic performance was expressed through sensitivity, specificity, and overall accuracy (95% CI). Comparisons used generalized estimating equations, adjusted for clustered data. Results: Specificity was high across all systems (0.957-0.986), confirming accurate recognition of non-carious surfaces, whereas sensitivity was moderate (0.327-0.487), reflecting frequent missed detections of enamel and dentin lesions. Accuracy ranged from 0.882 to 0.917, with no significant differences among models (p ≥ 0.05). Confounding factors, such as radiographic overlap, marginal restoration defects, and cervical artifacts, were the main sources of misclassification. Conclusions: AI algorithms, regardless of architecture or commercial status, showed similar diagnostic capabilities and a conservative detection profile, favoring specificity over sensitivity.
Improvements in dataset diversity, labeling precision, and explainability may further enhance reliability for secondary caries detection. Clinical Significance: AI-based CDSSs assist clinicians by providing consistent detection. Their high specificity is particularly valuable in minimizing unnecessary invasive treatments (overtreatment), though they should be used as adjuncts rather than a replacement for expert judgment.
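The sensitivity and specificity figures above are reported with 95% CIs; one common interval for a binomial proportion is the Wilson score method, sketched here as a generic illustration (the abstract does not state which interval method was used):

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score 95% CI for a binomial proportion
    (e.g., sensitivity = true positives / all reference-positive surfaces)."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half

# Hypothetical example: 50 detected lesions out of 100 reference-positive surfaces.
lo, hi = wilson_ci(50, 100)
```

Unlike the simple Wald interval, the Wilson interval behaves well near 0 and 1, which matters for specificities as high as 0.986.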
Armstrong, M.; Williams, H.; Fernandez Faith, E.; Ni, A.; Xiang, H.
Background: Lasers have wide applications in medicine and dermatology but are associated with pain and anxiety, particularly in younger patients. Pain mitigation is often limited to topical anesthetics in the outpatient setting. Distraction techniques are limited by the need for ocular protection, which can include adhesive eye patches that completely occlude vision. Virtual reality is effective at managing procedural pain and anxiety in other short medical procedures and is a promising tool for this population. Objective: This trial aims to assess the safety, feasibility, and efficacy of the Virtual Reality Pain Alleviation Therapeutic (VR-PAT) for pain management during outpatient laser procedures. Methods: Forty patients requiring outpatient laser therapy for at least two sessions will be recruited from a pediatric hospital in the midwestern United States for this randomized, two-arm crossover clinical trial with a 1:1 allocation ratio. During the first laser visit, the participant will be randomly assigned to either play the VR-PAT game during their procedure or wear the headset with a dark screen. Participants will answer questions about their pain (Numeric Rating Scale (NRS) 0-10), anxiety (State-Trait Anxiety Inventory for Children, NRS 0-10, Modified Yale Preoperative Anxiety Scale (mYPAS)), and pain medication usage. Those playing the VR-PAT will additionally report simulator sickness symptoms and their experience playing the game. At their second laser visit, participants will cross over to the opposite intervention from their first visit. The primary outcomes are the differences in self-reported pain and anxiety between the two interventions. Feasibility outcomes include the proportion of screened patients who are eligible, consent, and complete both visits, and adverse events reported. To evaluate the efficacy of pain reduction, composite scores combining pain score and pain medication use will be calculated for each laser visit.
To evaluate the efficacy of anxiety reduction, the change in mYPAS scores will be compared between control and VR groups at each visit using Wilcoxon rank-sum tests. All statistical analyses will follow the intention-to-treat principle with regard to intervention assignment at each visit. Results: The study was funded in January 2023 and began enrollment at that time. A total of 44 participants were recruited and data collection was completed in November 2025, with 40 subjects completing both visits. The crossover design was balanced: all 40 subjects who completed both visits experienced both the intervention and the control condition. The age range of the complete sample was 6 to 21 years at recruitment, and 55% of participants were female. Data analysis is in progress, with final results planned for June 2026. Conclusions: Findings from this randomized clinical trial will provide early evidence on the efficacy of the VR-PAT for reducing self-reported pain and anxiety during outpatient laser procedures. The results will inform a large-scale, multisite study. Trial Registration: ClinicalTrials.gov NCT05645224 [https://clinicaltrials.gov/study/NCT05645224]
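The planned Wilcoxon rank-sum comparison can be sketched with a normal-approximation implementation; this is illustrative only (no tie correction, small-sample exact tables omitted) and is not the trial's analysis code:

```python
import math

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.
    Assumes no ties; returns (z statistic, two-sided p-value)."""
    n1, n2 = len(x), len(y)
    pooled = sorted([(v, "x") for v in x] + [(v, "y") for v in y])
    # Sum of 1-based ranks belonging to sample x:
    w = sum(rank for rank, (v, grp) in enumerate(pooled, start=1) if grp == "x")
    mean_w = n1 * (n1 + n2 + 1) / 2
    var_w = n1 * n2 * (n1 + n2 + 1) / 12
    z = (w - mean_w) / math.sqrt(var_w)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical mYPAS change scores for two visit arms:
z1, p1 = rank_sum_test([1, 3, 5], [2, 4, 6])
z2, p2 = rank_sum_test([2, 4, 6], [1, 3, 5])
```

A rank-based test suits mYPAS scores because they are ordinal and typically not normally distributed.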
Khan, M.; Islam, A. M.; Abdel-Aty, Y.; Rosow, D.; Mallur, P.; Johns, M.; Rosen, C. A.; Bensoussan, Y. E.
Objective: Only preliminary investigations of the use of the 445-nanometer-wavelength blue light laser (BLL) for various laryngeal pathologies have been described. Currently, no standard exists for reporting treatment technique and tissue effect with this modality. Here, we aim to establish and validate a classification system to describe laser-induced tissue effects. Study Design: Retrospective video-based study for classification development and reliability validation. Methods: Video recordings of procedures performed with the BLL by multiple academic laryngologists were retrospectively reviewed. A preliminary 6-point classification (BLL 1-6) was developed based on expert consensus. Thirteen additional procedural clips were independently rated using the classification schema to assess perceived tissue effect and to measure inter- and intra-rater reliability. Results: The final 5-point classification system (BLL 1-5) comprised angiolysis, blanching, tissue vaporization, ablation with mechanical tissue removal, and cutting. The combined reviewers reached consensus in 89% (58 of 65) of case ratings; complete consensus was not achieved in 11% (7/65) of cases. Of the discordant ratings, 57% (4/7) involved clips illustrating the BLL-2 classification. Intra-rater reliability amongst the reviewers was 100%. Conclusion: The tissue effect of the 445 nm blue light laser can be reliably standardized with this proposed classification system, which can be used to facilitate future systematic study of outcomes and effective communication between laryngologists and trainees.
Claus, L.; McNamara, M.; Oser, C.; Fogle, C.; Canine, B.
Cardiovascular disease (CVD) remains the leading cause of mortality in the United States, despite being largely preventable through effective management of risk factors. This study evaluates the impact of Phase II cardiac rehabilitation (CR) on functional capacity and quality of life, using data from the Montana Outcomes Project Cardiac Rehabilitation Registry. Functional capacity was assessed via the six-minute walk test (6MWT) and quality of life via the Dartmouth COOP questionnaire, with statistical analyses exploring the influence of CR session attendance, demographic factors, and referring diagnoses. Results demonstrated significant gains in 6MWT distance, with a mean improvement of 330.73 feet (p < .0001), and in quality-of-life scores across all subgroups. A dose-response relationship was observed, indicating greater improvements with increased CR sessions (p < .0001), though diminishing returns were observed beyond 24-35 visits. Demographic factors and complex conditions influenced outcomes, underscoring the need for tailored strategies to enhance CR access and effectiveness. These findings highlight the critical role of CR in improving patient outcomes and emphasize the importance of addressing barriers to participation in underserved populations.
da Luz, C. C.; Sorbello, C. C. J.; Epifanio, E. A.; dos Santos, C. d. A.; Brandi, S.; Guerra, J. C. d. C.; Wolosker, N.
Background: Vascular access is essential in treating patients undergoing prolonged intravenous therapy such as chemotherapy, antibiotics, and parenteral nutrition. Since the 1990s, when PICCs (peripherally inserted central catheters) appeared, vascular access options have expanded significantly, revolutionizing the treatment landscape for all types of patients. Objective: To analyze and describe the profile of PICC use in a Brazilian quaternary hospital over 10 years, with data collected by the infusion therapy team, evaluating the number of PICCs implanted over the years, patients' epidemiological and clinical characteristics, insertion details, associated complications, and the reason for removal. Methods: A retrospective cohort study employing a quantitative, non-experimental approach to classify and statistically analyze past events associated with 21,652 PICCs implanted from January 2012 to December 2021 in a quaternary hospital in São Paulo, Brazil. All the catheters were implanted, and the data collected, by a team of nurses specializing in infusion therapy. We analyzed the number of catheters implanted over the years, insertion characteristics, patients' epidemiological and clinical data, possible associated complications, and the reason for removal. Statistical analyses were conducted using R software (version 4.4.1) and SPSS (version 29) for Windows (IBM Corp, Armonk, NY). Results: During the specified period, 21,652 catheters were analyzed. The patients' gender distribution was nearly balanced (48.2% versus 51.8%), and the average age was 66 years. Cardiovascular and metabolic issues were the most common comorbidities, and between 2020 and 2021, 29.3% of the sample tested positive for COVID-19. The most common location of hospitalization and implantation was the medical-surgical clinic (31.6%-41.4%), and the most used type of catheter was the Power Picc (83.9%).
The estimated complication incidence density was 2.94 complications per 1,000 catheter-days. Almost all the PICCs (98.2%) were adequately located at the cavo-atrial junction after the first attempt, 82.2% of catheters were removed after completion of therapy, and the median duration of catheter use was 12 days. Conclusion: PICCs are widely employed for drug infusion, with their use growing progressively due to the greater availability and training of specialized teams. The high efficiency of these devices, with a relatively low risk of complications, already observed in previous studies, was reinforced by the findings of this study of more than 20,000 catheters.
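The incidence-density figure is simple rate arithmetic (events per 1,000 catheter-days); a minimal sketch, with hypothetical counts rather than the study's actual totals:

```python
def incidence_density(n_events: int, exposure_days: float, per: float = 1000) -> float:
    """Events per `per` units of exposure time, here complications
    per 1,000 catheter-days. Illustrative helper, not the authors' code."""
    return n_events / exposure_days * per

# Hypothetical example: 294 complications over 100,000 catheter-days.
rate = incidence_density(294, 100_000)
```

Normalizing by catheter-days rather than by catheter count makes rates comparable between devices with very different dwell times.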
Heintzman, A. A.; Cumbe, Z. A.; Cumbane, V.; Cumming, O.; Holcomb, D.; Keenum, I.; Knee, J.; Monteiro, V.; Nala, R.; Brown, J.; Capone, D.
Wastewater surveillance is increasingly used for antimicrobial resistance (AMR) monitoring in urban environments, but low-resource settings often lack a piped sewerage system. Instead, coprophagous flies (flies that ingest feces) may serve as composite samplers for monitoring fecal wastes present in terrestrial environments. We evaluated whether the class 1 integron-integrase gene intI1 was associated with genetic markers of AMR and fecal source tracking (FST) markers in coprophagous flies collected from latrine entrances and food preparation areas in low-income urban Maputo, Mozambique. We quantified intI1, an enteric 16S rRNA target (for normalization), three FST markers, and 30 ARG targets using qPCR. We normalized concentrations of intI1 and each target to enteric 16S rRNA. We fit linear mixed models with a random intercept for housing compound to estimate within-fly associations between log10 relative abundance of intI1 and log10 relative abundance of each target, with and without adjustment for fly taxonomic group, capture location, and standardized fly mass. We also modeled per-fly unique ARG count (i.e., number of ARG targets detected) using Poisson regression. Of 188 flies assayed, 176 passed internal controls; intI1 and enteric 16S rRNA were detected in 95% and 96% of flies, respectively. Higher relative abundance of intI1 was positively associated with ARG and FST targets, with the strongest associations observed for sulfonamide- (sul1: β = 0.87; 95% CI: 0.81, 0.94; sul2: β = 0.81; 95% CI: 0.73, 0.89), tetracycline- (tetA: β = 0.78; 95% CI: 0.70, 0.85; tetB: β = 0.69; 95% CI: 0.60, 0.79), and trimethoprim-related (dfrA17: β = 0.78; 95% CI: 0.70, 0.86) genes. Associations with FST markers were weaker (human mtDNA: β = 0.46; 95% CI: 0.37, 0.55; human-associated Bacteroides: β = 0.34; 95% CI: 0.25, 0.43).
Higher relative abundance of intI1 was also associated with a greater number of ARGs detected: each 10-fold increase in intI1 was associated with an 8% higher expected unique ARG count (aRR=1.08, 95% CI: 1.04-1.12). These findings support the need for further research across different settings exploring intI1 carried by coprophagous flies as a potential standardized screening target for AMR surveillance in unsewered terrestrial environments.
Goodman, M. L.; Maknojia, S.; Sciba, A.; Robertson, D.; Keiser, P.
Background: Opioid-related mortality in Texas has escalated dramatically, increasingly driven by illicitly manufactured fentanyl. To address local surges in mortality, the Galveston County Health District deployed the Galveston County Opioid Defense Effort (GCODE) in July 2023, leveraging digitally integrated surveillance data from emergency medical services (EMS) and the Medical Examiner to provide targeted naloxone distribution in identified overdose hot spots. Methods: Using a segmented interrupted time series (ITS) design and Poisson regression with robust standard errors, we evaluated the population-level impact of GCODE on opioid-involved mortality through the end of 2025. Data were sourced from the Galveston Area Ambulance Authority (GAAA) and vital statistics (ICD-10 codes). We assessed mortality trajectory changes, the observed fatality ratio among EMS-detected opioid events (the Survival Gap), and demographic and geographic covariates. Results: The Poisson ITS model included 519 weekly observations (N = 14,827 tract-weeks across 101 census tracts). Pre-intervention, opioid mortality increased by 0.16% weekly (IRR = 1.0016; 95% CI: 1.000-1.003; p = 0.011). Following GCODE deployment, the mortality trajectory reversed to a sustained 0.55% weekly decrease (IRR = 0.9945; 95% CI: 0.990-0.999; p = 0.021). The observed fatality ratio among EMS-detected events declined from 7.59% (pre-intervention mean; SD = 0.111) to 1.71% (post-intervention; SD = 0.042; χ² = 19.824; p = 0.0001). Opioid decedents were significantly younger than the general mortality population (OR = 0.945 per year of age; p < 0.001), and were descriptively more likely to lack documented race/ethnicity data (41.23% vs. 8.27% Unknown; p < 0.001), limiting equity analysis. Conclusions: The findings are consistent with GCODE having meaningfully reduced opioid mortality by substantially lowering event-level lethality.
These results suggest that targeted, digitally coordinated harm reduction can decouple overdose incidence from fatal outcomes, with implications for harm reduction program design in structurally constrained environments.
Deng, M. D. A.; Alayande, B. T.; Sheferaw, E. D.; Ngutete Mukundwa, P.; Fofanah, T.; Peter, M. B.; Kuron, D.; Bekele, A.; Dau, A. D.
Background: Access to safe, equitable, and affordable surgical and anesthesia care is critical to reducing the burden of surgical diseases in Africa. To understand the state of access in South Sudan, we conducted a baseline assessment of surgical services in Central Equatoria State (CES) in May 2024. Objectives: This study aimed to survey public healthcare facilities in CES capable of providing essential surgical services. We used the capacity to perform cesarean section, laparotomy, and open fracture management--the Bellwether procedures--as a proxy for assessing workforce, infrastructure, financing, information management, and service delivery. Methods: We used a validated and contextualized Surgical Assessment Tool developed by the Harvard Program in Global Surgery and Social Change and the World Health Organization. Data were collected at the facility level, summarized descriptively using percentages, means (standard deviations), and medians (minimum, maximum), and visualized in graphs, charts, and tables. Results: All three public health facilities assessed could perform the Bellwether procedures for their catchment populations. However, workforce availability, financing, and surgical infrastructure were major constraints. The surgical workforce density was 2.27 surgical, anesthesia, and obstetric specialists per 100,000 population. Specialized procedures--such as repair of cleft lip and palate, clubfoot, and hydrocephalus shunt placement--were unavailable at all sites. None had magnetic resonance imaging (MRI) machines. The average annual facility budget was $918,850, ranging from $3,960 to $800,000 at the teaching hospital--insufficient for proper operations. Conclusion: While the Bellwether procedures are routinely performed, access to quality and affordable care is compromised by deficits in workforce, financing, and infrastructure.
We recommend that the Ministry of Health scale this survey nationally and develop a surgical policy and strategic plan focused on improving infrastructure, workforce, and financing for surgical and anesthesia care in South Sudan.
Musonda, R.; Ito, K.; Omori, R.; Ito, K.
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has continuously evolved since its emergence in the human population in 2019. As of 1st August 2025, more than 1,700 Omicron subvariants have been designated by the Pango nomenclature system, which designates new lineages based on genetic and epidemiological information of SARS-CoV-2 strains. However, strains with similar genetic backgrounds and the same phenotype may nonetheless be given different Pango lineage names. In this paper, we propose a new algorithm, called FindPart-w, which can identify groups of viral lineages that share the same relative effective reproduction numbers. We introduced a new lineage replacement model, called the constrained RelRe model, which constrains groups of lineages to have the same relative effective reproduction numbers. The FindPart-w algorithm searches for the equality constraints that minimise the Akaike Information Criterion of constrained RelRe models. Using hypothetical observation count data created by simulation, we found that the FindPart-w algorithm can identify groups of lineages having the same relative effective reproduction number in a practical computational time. Applying FindPart-w to real-world data of time-stamped lineage counts from the United States, we found that the Pango lineage nomenclature system may have given different lineage names to SARS-CoV-2 strains even when they have the same relative effective reproduction number and similar genetic backgrounds. In conclusion, this study showed that viruses with the same relative effective reproduction number are identifiable from temporal count data of viral sequences. These findings will contribute to the future development of lineage designation systems that consider both the genetic backgrounds and the transmissibilities of lineages.
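The core idea of the abstract's AIC-driven constraint search can be shown with a deliberately tiny, brute-force toy: enumerate partitions of a few lineages, fit each group under a shared exponential growth rate, and pick the partition minimising AIC. This is not the authors' FindPart-w implementation (which uses the constrained RelRe likelihood and scales to many lineages); the counts, rates, and grid-based profiling below are all invented for illustration.

```python
import numpy as np
from math import lgamma

# Toy sketch: lineages A and B share a true growth rate, C differs.
rng = np.random.default_rng(2)
t = np.arange(20)
rates = {"A": 0.10, "B": 0.10, "C": 0.25}
counts = {k: rng.poisson(np.exp(1.0 + r * t)) for k, r in rates.items()}

def poisson_ll(y, mu):
    # Poisson log-likelihood of counts y under means mu
    return float(np.sum(y * np.log(mu) - mu - [lgamma(v + 1) for v in y]))

def group_ll(names):
    # Profile the shared rate on a coarse grid (sketch only); the intercept
    # MLE given a rate r is log(sum(y) / sum(exp(r*t))).
    best = -np.inf
    for r in np.linspace(0.0, 0.4, 81):
        ll = 0.0
        for k in names:
            y = counts[k]
            a = np.log(y.sum() / np.exp(r * t).sum())
            ll += poisson_ll(y, np.exp(a + r * t))
        best = max(best, ll)
    return best

partitions = [[("A",), ("B",), ("C",)], [("A", "B"), ("C",)],
              [("A", "C"), ("B",)], [("B", "C"), ("A",)], [("A", "B", "C")]]

def aic(part):
    # one intercept per lineage plus one shared rate per group
    k = sum(len(g) + 1 for g in part)
    return 2 * k - 2 * sum(group_ll(g) for g in part)

best = min(partitions, key=aic)
print(best)
```

Merging lineages with genuinely different growth rates costs far more log-likelihood than the two-point AIC penalty saved by dropping a parameter, so the fast-growing lineage C ends up in its own group; this trade-off is the mechanism by which an AIC search recovers groups of lineages sharing a reproduction number.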
MacLean, E. L.; Ma, T. T.; Chuong, L. H.; Minh, K. H.; Hoddinott, G.; Pham, Y. N.; Tiep, H. T.; Nguyen, T.-A.; Fox, G.; Nguyen, N. T.
Introduction: Improved diagnostics are needed for people at risk of tuberculosis, especially adolescents. Tongue swab (TS) molecular testing has emerged as a promising strategy for tuberculosis diagnosis. We evaluated the diagnostic accuracy and acceptability of Xpert MTB/RIF Ultra (Xpert) using TS samples for tuberculosis detection among adolescents. Methods: We conducted a cross-sectional diagnostic accuracy study with consecutive recruitment in Vietnam. Adolescents aged 10-19 who were recommended to undergo investigation for tuberculosis and had not received tuberculosis treatment in recent years were eligible. Participants provided TS and sputum samples and completed a structured survey regarding sampling experiences. TS was tested on Xpert, with sputum tested on Xpert and liquid culture. We utilised a composite reference standard of a positive result on sputum Xpert or sputum culture to define disease status. Sensitivity, specificity, and diagnostic yield were calculated for TS Xpert. Results: From July to December 2025, we enrolled 225 adolescents from Can Tho and An Giang provinces in southern Vietnam. Fewer than half of the participants (96/225, 43%) exhibited a tuberculosis-like symptom, and the majority (157/225, 70%) were close contacts of a person recently diagnosed with tuberculosis. TS were collected from all adolescents, while 116 (52%) could provide mucopurulent sputum. Tuberculosis prevalence was relatively low (12/225, 5.3%). TS Xpert sensitivity (90% CI) and specificity (90% CI) were 58.3% (35.6, 78.0) and 99.5% (97.9, 99.9), respectively. Diagnostic yield among all diagnosed was 58.3% (7/12). TS sampling was highly acceptable to adolescents; the short time and simplicity of collecting TS were considered favourably. Conclusions: The sensitivity and diagnostic yield of TS Xpert were relatively low among adolescents recommended for tuberculosis investigation, a group that includes asymptomatic individuals who may not provide high-quality sputum. Specificity was excellent, and everyone could provide a TS. The high acceptability of TS indicates that it remains a promising sample type for diagnostic algorithms.
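The headline sensitivity of 58.3% corresponds to 7/12 reference-positive adolescents detected, and a Wilson score interval at the 90% level reproduces the reported (35.6, 78.0) bounds; note that the use of Wilson intervals is an inference from the numbers, not stated in the abstract.

```python
from statistics import NormalDist

def wilson_ci(k, n, level=0.90):
    """Wilson score confidence interval for a binomial proportion k/n."""
    z = NormalDist().inv_cdf(0.5 + level / 2)
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * ((p * (1 - p) / n + z**2 / (4 * n**2)) ** 0.5) / denom
    return centre - half, centre + half

sens = 7 / 12                 # true positives / reference-positive (abstract)
lo, hi = wilson_ci(7, 12)
print(f"sensitivity {sens:.1%}, 90% CI ({lo:.1%}, {hi:.1%})")
```

Wilson intervals are a common choice for small denominators like n = 12 because, unlike the simple Wald interval, they stay inside (0, 1) and have better coverage near the extremes.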